Research Synthesis Methods
Wiley
Preprints posted in the last 7 days, ranked by how well they match the content profile of Research Synthesis Methods, based on 20 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit.
Lin, T.; Li, Y.; Huang, Z.; Gui, T. T.; Wang, W.; Guo, Y.
Target trial emulation (TTE) offers a principled way to estimate treatment effects using real-world observational data, but analyses of time-varying treatment strategies remain vulnerable to immortal time bias. The clone-censor-weight (CCW) approach is increasingly used to address this problem, yet key aspects of its causal interpretation and implementation remain unclear. In this work, we emulate a target trial using electronic health records (EHRs) to compare completion of a 3-dose 9-valent human papillomavirus (HPV) vaccination series within 12 months versus remaining partially vaccinated among vaccine initiators. We link CCW to the classic potential outcome framework in causal inference, evaluate the role of different weighting mechanisms, and account for within-subject correlation induced by cloning using cluster-robust variance estimation. Our study provides practical guidance for applying CCW in real-world comparative effectiveness studies to address immortal time bias and supports more rigorous and interpretable treatment effect estimation in TTE.
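The clone-censor-weight design described in this abstract can be sketched in a few lines. The cohort, strategies, and weights below are entirely hypothetical toy values, not the paper's EHR data, and the weighting is a single marginal probability rather than the time-varying, covariate-adjusted weights a real analysis would use:

```python
import pandas as pd

# Toy cohort for illustration only. "completed_month" is the month the
# third dose was received (None = series never completed); "event" is
# the outcome indicator.
patients = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "completed_month": [6, None, 10, None],
    "event": [0, 1, 0, 1],
})

# Step 1: clone every vaccine initiator into both treatment strategies.
clones = []
for _, p in patients.iterrows():
    for strategy in ("complete", "partial"):
        # Step 2: artificially censor a clone once the patient's observed
        # behaviour deviates from the clone's assigned strategy.
        if strategy == "complete":
            deviates = pd.isna(p["completed_month"])   # never completed
        else:
            deviates = pd.notna(p["completed_month"]) and p["completed_month"] <= 12
        clones.append({"id": p["id"], "strategy": strategy,
                       "censored": bool(deviates), "event": int(p["event"])})
clones = pd.DataFrame(clones)

# Step 3: inverse-probability-of-censoring weights, so uncensored clones
# stand in for their censored counterparts within each arm.
for strategy, grp in clones.groupby("strategy"):
    p_uncens = (~grp["censored"]).mean()
    clones.loc[grp.index, "weight"] = (~grp["censored"]) / p_uncens
```

Because every patient appears in both arms, immortal time is assigned to both strategies rather than only to the "completed" group, which is how the design avoids immortal time bias; the cloning also induces the within-subject correlation that the paper handles with cluster-robust variance estimation.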
Preston, J. D.; Abadiotakis, H.; Tang, A.; Rust, C. J.; Halkos, M. E.; Daneshmand, M. A.; Chan, J. L.
Clinical research dissemination is frequently hindered by administrative friction and methodological inconsistency. To address these barriers, we developed TernTables, a freely available, open-source web application (https://www.tern-tables.com/) and R package (https://cran.r-project.org/package=TernTables) that streamlines the transition from raw data to formatted results for descriptive and univariate clinical reporting. The system integrates a client-side screening protocol for protected health information (PHI) with a rule-based decision tree that selects and executes appropriate frequency-based, parametric, or non-parametric statistical tests based on data distribution and class. TernTables generates publication-ready summary tables in Microsoft Word format, complemented by dynamically generated methods text and the underlying R code to ensure complete transparency and reproducibility. Validation using a landmark clinical trial dataset demonstrated concordance with established biostatistical approaches for descriptive and univariate analyses. TernTables is designed to supplement, not replace, formal statistical consultation by standardizing routine descriptive and univariate workflows, allowing biostatistical expertise to be focused on complex analyses and study design. By lowering technical and financial barriers, the platform democratizes access to rigorous statistical workflows while maintaining methodological excellence and reducing "researcher degrees of freedom."
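A rule-based decision tree of the kind this abstract describes can be sketched as follows. This is a hypothetical re-implementation of the general idea (variable class plus a normality screen), not TernTables' actual logic, and the 0.05 Shapiro-Wilk threshold is an assumption:

```python
import numpy as np
from scipy import stats

def choose_test(x, y):
    """Pick a two-group test from variable class and distribution.

    Hypothetical sketch of a rule-based test-selection tree, not the
    TernTables package's actual decision rules.
    """
    x, y = np.asarray(x), np.asarray(y)
    if x.dtype.kind not in "fiu":            # non-numeric: frequency-based test
        cats = sorted(set(x) | set(y))
        table = [[int(np.sum(x == c)) for c in cats],
                 [int(np.sum(y == c)) for c in cats]]
        chi2, p, dof, expected = stats.chi2_contingency(table)
        return "chi-squared", p
    # Numeric: Shapiro-Wilk screen decides parametric vs non-parametric.
    if stats.shapiro(x)[1] > 0.05 and stats.shapiro(y)[1] > 0.05:
        return "Welch t-test", stats.ttest_ind(x, y, equal_var=False)[1]
    return "Mann-Whitney U", stats.mannwhitneyu(x, y)[1]
```

Encapsulating the choice this way is what makes the methods text reproducible: the same rule that picked the test can emit the sentence describing it.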
Chaves, E. T.; Teunis, J. T.; Digmayer Romero, V. H.; van Nistelrooij, N.; Vinayahalingam, S.; Sezen-Hulsmans, D.; Mendes, F. M.; Huysmans, M.-C.; Cenci, M. S.; Lima, G. d. S.
Background: Radiographic detection of caries lesions adjacent to restorations is challenging due to limitations of two-dimensional imaging and difficulties distinguishing true lesions from restorative or anatomical radiolucencies. Artificial intelligence (AI)-based clinical decision support systems (CDSSs) have been introduced to assist radiographic interpretation; however, different AI tools may yield variable diagnostic outputs, and their comparative performance remains unclear. Objective: To compare the diagnostic performance of commercial and experimental AI algorithms for detecting secondary caries lesions on bitewings. Methods: This cross-sectional diagnostic accuracy study included 200 anonymized bitewings comprising 885 restored tooth surfaces. A consensus group reference standard identified all surfaces with a caries lesion and classified each lesion by type (primary/secondary) and depth (enamel-only/dentin-involved). Five commercial (Second Opinion, CranioCatch, Diagnocat, DIO Inteligencia, and Align X-ray Insights) and three experimental (Mask R-CNN-based and Mask DINO-based) systems were tested. Diagnostic performance was expressed through sensitivity, specificity, and overall accuracy (95% CI). Comparisons used generalized estimating equations, adjusted for clustered data. Results: Specificity was high across all systems (0.957-0.986), confirming accurate recognition of non-carious surfaces, whereas sensitivity was moderate (0.327-0.487), reflecting frequent missed detections of enamel and dentin lesions. Accuracy ranged from 0.882 to 0.917, with no significant differences among models (p >= 0.05). Confounding factors, such as radiographic overlapping, marginal restoration defects, and cervical artifacts, were the main sources of misclassification. Conclusions: AI algorithms, regardless of architecture or commercial status, showed similar diagnostic capabilities and a conservative detection profile, favoring specificity over sensitivity. 
Improvements in dataset diversity, labeling precision, and explainability may further enhance reliability for secondary caries detection. Clinical Significance: AI-based CDSSs assist clinicians by providing consistent detection. Their high specificity is particularly valuable in minimizing unnecessary invasive treatments (overtreatment), though they should be used as adjuncts rather than a replacement for expert judgment.
Ferguson, D. J.
Background: Clinical pharmacists, trainees, and educators rely on multi-database literature retrieval and structured evidence synthesis to answer drug-information questions. Existing workflows require navigation across PubMed, DailyMed, LactMed, interaction checkers, and specialty guideline repositories with manual de-duplication, appraisal, and synthesis. Commercial platforms that integrate these functions are costly and often unavailable in community, rural, and international training contexts. Objective: This report describes the architecture of AuditMed, a single-file, browser-based clinical evidence audit platform, and reports preliminary stress-test results against a complex multi-morbidity case corpus. AuditMed is intended for research and educational use and is not a substitute for clinical judgment or validated commercial clinical decision-support systems. Methods: AuditMed integrates nineteen free, publicly available clinical and biomedical application programming interfaces into a six-stage Search → Select → Parse → Analyze → Infer → Create pipeline and supports browser-local patient-case ingestion with regex-based HIPAA Safe Harbor de-identification. Preliminary stress-testing was conducted against eleven cases (Cases 30 through 40) from the Complex Clinical Case Compendium Software Validation Suite, each featuring over twenty concurrent active disease states. For each case, the one-click inference pipeline was executed with default settings and the full Clinical Inference Report was captured verbatim. No retrieval-sensitivity, synthesis-fidelity, or time-to-answer endpoints were pre-specified; the exercise was qualitative and oriented toward pipeline behavior under extreme multi-morbidity. Results: The pipeline completed without fatal errors for all eleven cases and produced a structured Clinical Inference Report in each instance. Quantitative-finding detection performed as designed for hematologic parameters and cardiac biomarkers. Two parser defects were identified and are reproduced in the appendix: an age-as-fever regex-precedence defect affecting seven cases and a diagnosis-versus-medication parsing defect affecting one case. Evidence-linkage rate varied from zero evidence-linked statements in seven cases to eleven in one case, reflecting dependence of the inference layer on MeSH-indexed literature coverage of the specific case diagnoses. Conclusions: AuditMed is an early-stage, open-source platform whose value at this stage is in providing a free, transparent, auditable workflow for multi-source evidence synthesis with explicit uncertainty flagging. The preliminary results document both robust end-to-end completion under extreme case complexity and specific, reproducible parser defects that will be addressed before formal evaluation. Planned evaluation studies are described.
Farrell, G.; Attafi, O. A.; Fragkouli, S.-C.; Heredia, I.; Fernandez Tobias, S.; Harrison, M.; Hermjakob, H.; Jeffryes, M.; Obregon Ruiz, M.; Pearce, M.; Pechlivanis, N.; Lopez Garcia, A.; Psomopoulos, F.; Tosatto, S. C. E.
Unprecedented breakthroughs are being made in life science research through the application of artificial intelligence (AI). However, adherence to method reporting guidelines is necessary to support their reusability and reproducibility. The DOME Copilot solution extracts structured reports of AI methods using a large language model to help interpret manuscripts. It is a fast and efficient resource capable of scaling to annotate the corpus of global AI literature, unlocking value and trust in published methods.
Gada, L.; Afuleni, M. K.; Noble, M.; House, T.; Finnie, T.
Knowing the mortality rates associated with infection by a pathogen is essential for effective preparedness and response. Here, harnessing the flexibility of a Bayesian approach, we produce an estimate of the Infection Fatality Ratio (IFR) for A(H5N1) conditional on explicit assumptions, and quantify the uncertainty thereof. We also apply the method to first-wave COVID-19 data up to March 2020, demonstrating the estimates that could have been obtained had the model been available then. Our analysis uses World Development Indicators (WDI) from the World Bank, the A(H5N1) WHO confirmed cases and deaths tracker by country (2003-2024), and COVID-19 cases and deaths data from Johns Hopkins University (January and February 2020). Since infectious disease dynamics are typically influenced by local socio-economic factors rather than political borders, individual countries are placed within clusters of countries sharing similar WDIs relevant to respiratory viral diseases, with clusters derived by performing hierarchical clustering. To estimate the IFR, we fit a Negative Binomial Bayesian Hierarchical Model for A(H5N1) and COVID-19 separately. We explicitly modelled key unobserved parameters with informative priors from expert opinion and literature. By modelling underreporting, our analysis suggests lower fatality (15.3%) compared to WHO's Case Fatality Ratio estimate (54%) based on lab-confirmed cases. However, credible intervals are wide ([0.5%, 64.2%] 95% CrI). Therefore, good preparedness for a potential A(H5N1) pandemic implies adopting scenario planning under our central estimate, as well as for IFRs as high as 70%. Our approach also returns a COVID-19 IFR estimate of 2.8% with [2.5%, 3.1%] 95% CrI which is consistent with literature.
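The core idea of the IFR correction — deaths arise from unobserved infections, not from confirmed cases — can be illustrated with a toy grid-based posterior. This is NOT the paper's Negative Binomial hierarchical model; the counts, the Beta(2, 8) ascertainment prior, and the binomial death model are all stand-in assumptions chosen only to show how modelling underreporting pulls the estimate below the naive case fatality ratio:

```python
import numpy as np
from scipy import stats

# Hypothetical counts: 900 lab-confirmed cases, 450 deaths (naive CFR = 50%).
C, D = 900, 450
ifr_grid = np.linspace(0.001, 0.999, 999)
rho_grid = np.linspace(0.05, 1.0, 96)        # rho = fraction of infections confirmed
rho_prior = stats.beta(2, 8).pdf(rho_grid)   # informative prior: low ascertainment

# Marginalise the binomial death likelihood over the ascertainment prior.
post = np.zeros_like(ifr_grid)
for rho, w in zip(rho_grid, rho_prior):
    n_infections = int(round(C / rho))       # infections implied by this rho
    post += w * stats.binom.pmf(D, n_infections, ifr_grid)
post /= post.sum()

ifr_mean = float(np.sum(ifr_grid * post))    # sits well below the naive 50% CFR
```

Because the prior puts most of its mass on low ascertainment, the implied number of infections greatly exceeds the confirmed count, so the posterior mean IFR lands far below the naive CFR — the same qualitative effect as the paper's 15.3% estimate versus WHO's 54% CFR.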
Krause, J.; van Rij, J.; Borst, J. P.
Hidden (semi-) Markov Models (HsMMs) are increasingly being used to segment neurophysiological signals into sequences of latent cognitive processes. The idea is that different processes leave distinct traces in trial-level recordings of (multivariate) neurophysiological signals. Markov models, equipped with an emission model of these traces and a latent process model describing the progression through the different latent processes involved in a task, can then be used to infer the most likely process for any time-point and trial. However, the currently used HsMMs remain limited in two important ways. First, they cannot account for subject-level heterogeneity in the latent and emission process. Instead, a single group-level model is assumed to explain the entire data. Second, they cannot account for the potentially non-linear effects of experimental covariates on the latent and emission process. To address these problems, we present a modeling framework in which the HsMM parameters of the emission and latent process are replaced with mixed additive models, including smooth functions of experimental covariates and random effects. We derive all necessary quantities for empirical Bayes and fully Bayesian inference for all parameters and provide a Python implementation of all estimation algorithms. To demonstrate the advantages offered by this framework, we apply such a multi-level model to an existing lexical decision dataset. We show that, even in such a simple task, not all subjects rely on the same processes equally and that at least two semi-Markov states, previously believed to reflect distinct processes, might actually relate to the same cognitive process.
Sionakidis, A.; Pinilla Alba, K.; Abraham, J.; Simidjievski, N.
Emerging multi-omic profiling has made it feasible to subtype disease using multiple molecular layers. However, inconsistent preprocessing, heterogeneous implementations, variable evaluation, and limited reproducibility often constrain method selection. Here, we systematically benchmark 22 publicly available unsupervised approaches for bulk data on the TCGA-BRCA cohort across five modalities (RNA-seq, miRNA, DNA methylation, copy numbers, single nucleotide polymorphisms) and validate findings in two independent datasets, enabling a multi-layered comparison of performance, heterogeneous data support and interpretability. Most approaches fuse multi-omic data to produce a two-cluster solution largely aligned with ER status, with higher-resolution approaches further refining these into four coherent subclasses (angiogenic luminal, oxidative-phosphorylation/HER2-low luminal, immune-inflamed basal-like, and hyper-proliferative basal-like). Our benchmarking results indicate that methods based on similarity networks can efficiently produce stable, reliable partitions. Matrix factorisation and Bayesian factorisation algorithms produce rich latent representations, allowing quantification of feature and modality contributions, albeit at higher computational cost. Consensus clustering can be used on a case-by-case basis and refine partitions into more robust and generalisable findings. We aggregate our insights into a decision workflow that aligns with study goals, data characteristics, and computational resources, enabling optimal analytic strategies. This comprehensive assessment provides a practical roadmap for investigators seeking to extract reproducible, biologically meaningful subtypes from complex multi-omic datasets. We highlight the different technical and practical benefits and trade-offs that shape the selection and development of multi-omic approaches applied in precision oncology.
Zhang, Z.; Liu, A. H.; Zhang, Z.
Brain network analysis has emerged as a critical framework for understanding the complex organization and function of the human brain, underpinning insights into cognition, behavior, and neuropsychiatric conditions. Central to this approach is the parcellation of the brain into discrete regions, which simplifies high-dimensional connectome data and facilitates the investigation of network architectures. However, the proliferation of brain parcellation schemes introduces significant challenges: different parcellations often yield varying network sizes and measures, complicating cross-study comparisons and the reproducibility of findings. Moreover, most connectome construction pipelines are rigid, typically outputting connectivity matrices from only one or a few parcellation schemes, which limits flexibility. In this paper, we address these issues by introducing BridgeBP, a novel toolbox designed to bridge brain parcellations by leveraging continuous brain connectivity concepts. BridgeBP transforms structural connectivity matrices derived from one parcellation scheme into matrices corresponding to more than 40 alternative schemes, standardizing analyses and enhancing the robustness of network studies. Through extensive evaluations, we demonstrate that BridgeBP enables consistent network comparisons across diverse parcellation frameworks, paving the way for more reproducible and generalizable insights in brain connectome research.
Gantenberg, J. R.; La Joie, R.; Heston, M. B.; Ackley, S. F.
Qualitative models of Alzheimer's pathology often posit that amyloid accumulation follows a sigmoid curve, indicating that the rate of deposition wanes over time. Longitudinal PET data now allow us to investigate amyloid accumulation trajectories with greater detail and over longer follow-up periods. We combine inferences from simulated amyloid trajectories, empirical PET data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), and the sampled iterative local approximation algorithm (SILA) to assess whether amyloid accumulation reaches a physiologic ceiling. We find that SILA reliably detects a ceiling, when present, across a range of simulated scenarios that impose a sigmoid shape. When fit to empirical data from ADNI, however, SILA does not appear to indicate the presence of a ceiling. Thus, we conclude that amyloid trajectories may not reach a physiologic ceiling during the stages of Alzheimer's disease typically observed while patients remain under follow-up in cohort studies. Fits using SILA indicate that illustrative models of biomarker cascades, while useful tools for conceptualizing and interrogating pathologic processes, may not represent the shapes of amyloid trajectories accurately. Summary for General Public: Amyloid, a protein implicated in Alzheimer's disease, is thought to reach a plateau in the brain, but methods that estimate how amyloid changes over time suggest it grows unabated. Gantenberg et al. use one such method and simulations to argue that amyloid does not reach a plateau during the typical course of Alzheimer's.
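The simulation logic — impose a sigmoid trajectory with a known ceiling, add measurement noise, and test whether the ceiling is recovered — can be illustrated with a generic logistic fit. This is a sketch of the validation idea only, not the SILA algorithm; the curve parameters and noise level are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, ceiling, rate, t50):
    """Logistic trajectory with an explicit asymptotic ceiling."""
    return ceiling / (1 + np.exp(-rate * (t - t50)))

# Simulate one trajectory with a known ceiling of 2.0 plus PET-like noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 60)                         # years of follow-up
obs = sigmoid(t, 2.0, 0.6, 10.0) + rng.normal(0, 0.05, t.size)

# Fit the same functional form and read off the estimated ceiling.
params, _ = curve_fit(sigmoid, t, obs, p0=[1.5, 0.5, 8.0])
est_ceiling, est_rate, est_t50 = params
```

When the data truly follow a sigmoid, the fitted ceiling lands near the imposed value; the paper's point is that on empirical ADNI data, no such ceiling is apparent within the observed follow-up window.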
O'Mahony, D. G.; Beasley, J.; Zanti, M.; Dennis, J.; Dutta, D.; Kraft, P.; Kristensen, V.; Chenevix-Trench, G.; Easton, D. F.; Michailidou, K.
Summary statistics fine-mapping methods offer advantages over classical methods, including avoiding data-sharing constraints and improved modelling of correlated variables and sparse effects. However, their performance has not been comprehensively evaluated in breast cancer using real-world data. Previous multinomial stepwise regression (MNR) fine-mapping analyses for breast cancer identified 196 credible sets. Here, we apply summary statistics fine-mapping, compare methods, and assess parameters influencing performance. Using summary statistics from the Breast Cancer Association Consortium, we compared finiMOM, SuSiE, and FINEMAP to published MNR results across 129 regions. Performance was assessed by recall using in-sample and out-of-sample LD. Discordant credible sets were examined for technical factors, and target genes were defined using the INQUISIT pipeline. SuSiE showed the closest agreement with MNR. Results varied across regions depending on the assumed number of causal variants (L), with higher values reducing recall and no single L maximising performance. At optimal L per region, SuSiE identified 8,192 CCVs in 244 credible sets, with recall of 88%, 86%, and 72% for overall, ER-positive, and ER-negative breast cancer. Thirty MNR sets were missed. Discordance was partially explained by allele flips, imputation quality, and array heterogeneity. Fifty-two MNR-identified genes, including BRCA2, WNT7B, and CREBBP, were not recovered, while additional candidate genes were identified. Using out-of-sample LD reduced recall by 3% but identified novel variants. Fine-mapping results vary across methods, and no single approach is sufficient. The choice of L strongly influences results, and combining analytical approaches with functional validation can improve causal variant identification.
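The credible sets being compared here share a generic construction: rank variants by posterior inclusion probability (PIP) and keep the smallest set reaching the target coverage. The sketch below shows that construction on made-up PIPs; the real tools named in the abstract (SuSiE, FINEMAP, finiMOM) do the hard part of computing PIPs from summary statistics and LD, which is not shown:

```python
import numpy as np

def credible_set(pip, coverage=0.95):
    """Smallest set of variants whose PIPs sum to the target coverage.

    Generic construction for illustration; assumes the PIPs describe a
    single causal signal so that they sum to (at least) 1.
    """
    pip = np.asarray(pip, float)
    order = np.argsort(pip)[::-1]                 # variants by descending PIP
    cum = np.cumsum(pip[order])
    k = int(np.searchsorted(cum, coverage)) + 1   # first index reaching coverage
    return order[:k]

pip = np.array([0.60, 0.25, 0.08, 0.04, 0.02, 0.01])   # hypothetical PIPs
cs = credible_set(pip)
```

The abstract's observation that recall falls as the assumed number of causal variants L grows maps onto this picture directly: each extra causal signal spreads posterior mass over more, smaller credible sets, making exact agreement with the MNR sets harder.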
Madan, R.; Crane, P. K.; Gennari, J. H.; Latimer, C. S.; Choi, S.-E.; Grabowski, T. J.; Mac Donald, C. L.; Hunt, D.; Postupna, N.; Bajwa, T.; Webster, J.
Quantitative neuropathology has advanced through whole-slide imaging and digital histology platforms. Yet, these measurements rarely align with neuroimaging coordinate frameworks that may be useful for spatial modeling and other applications. QNPtoVox, short for quantitative neuropathology to voxels, is a reproducible, modular pipeline that transforms quantitative metrics generated by digital pathology software (HALO) into voxel-based maps registered to a standard common coordinate (MNI) template. The workflow integrates digital histopathology, gross tissue photography, ex-vivo MRI, and nonlinear registration to generate spatially standardized 3D pathology representations. This Methods article provides a complete procedural description, including required materials, step-wise instructions, operator-dependent checkpoints, expected outputs, reproducibility evaluation, and troubleshooting. QNPtoVox enables voxel-level integration of neuropathology with neuroimaging tools, unlocking existing histopathology datasets for computational modeling and cross-cohort harmonization.
Matthewman, J.; Denaxas, S.; Langan, S.; Painter, J. L.; Bate, A.
Objectives: Large language models (LLMs) have shown promise in creating clinical codelists for research purposes, a time-consuming task requiring expert domain knowledge. Here, we evaluate the performance and assess failure modes of a retrieval augmented generation (RAG) approach to creating clinical codelists for the large and complex medical terminology used by the Clinical Practice Research Datalink (CPRD). Materials & Methods: We set up a RAG system using a database of word embeddings of the medical terminology that we created using a general-purpose word embedding model (gemini-embedding). We developed 7 reference codelists presenting different challenges and tagged required and optional codes. We ran 168 evaluations (7 codelists, 2 different database subsets, 4 models, 3 epochs each). Scoring was based on the omission of required codes, and inclusion of irrelevant codes. We used model-grading (i.e., grading by another LLM with the reference codelists provided as context) to evaluate the output codelists (a score of 0% being all incorrect and 100% being all correct). Results: We saw varying accuracy across models and codelists, with Gemini 3 Pro (score 43%) generally performing better than Claude Sonnet 4.6 (36%), Gemini 3 Flash, and OpenAI GPT 5.2 performing worst (14%). Models performed better with shorter target codelists (e.g., Eosinophilic esophagitis with four codes, and Hidradenitis suppurativa with 14 codes). In contrast, all models consistently failed to produce a complete Wrist fracture codelist (with 214 required codes). We further present evaluation summaries, and failure mode evaluations produced by parsing LLM chat logs. Discussion: Beyond showing that a single-shot RAG approach is currently not suitable for codelist generation, we demonstrate failure modes including hallucinations, retrieval failures and generation failures where retrieved codes are not used.
Conclusions: Our findings suggest that while RAG systems using current frontier LLMs may create correct clinical codelists in some cases, they still struggle with large and complex terminologies and codelists with a large number of codes. The failure modes we highlight can inform the design of future workflows that avoid them.
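The retrieval half of a RAG codelist builder reduces to nearest-neighbour search over term embeddings. The sketch below is a toy version with random vectors standing in for a real embedding model (the paper used gemini-embedding over the CPRD terminology); the term list and dimensionality are invented for illustration:

```python
import numpy as np

# Toy embedding store: unit-norm vectors, one per medical term. Random
# stand-ins for vectors a real embedding model would produce.
rng = np.random.default_rng(42)
terms = ["eosinophilic esophagitis", "wrist fracture",
         "hidradenitis suppurativa", "asthma", "colles fracture"]
emb = rng.normal(size=(len(terms), 64))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def retrieve(query_vec, k=2):
    """Return the k terms nearest the query by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = emb @ q                          # cosine similarity per term
    return [terms[i] for i in np.argsort(sims)[::-1][:k]]

# Query: a slightly perturbed copy of the "wrist fracture" embedding.
hits = retrieve(emb[1] + 0.05 * rng.normal(size=64))
```

Two of the failure modes the abstract names live on either side of this step: retrieval failures, where the relevant codes never make it into `hits`, and generation failures, where they do but the downstream LLM ignores them when assembling the codelist.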
Kamulegeya, R.; Nabatanzi, R.; Semugenze, D.; Mugala, F.; Takuwa, M.; Nasinghe, E.; Musinguzi, D.; Namiiro, S.; Katumba, A.; Ssengooba, W.; Nakatumba-Nabende, J.; Kivunike, F. N.; Kateete, D. P.
Background: Tuberculosis (TB) remains a leading cause of infectious disease mortality worldwide, and treatment failure contributes to ongoing transmission, drug resistance, and poor clinical outcomes. Artificial intelligence and machine learning approaches have attracted growing interest for predicting tuberculosis treatment outcomes, but the literature is heterogeneous and lacks a comprehensive synthesis. Methods: We conducted a systematic review and meta-analysis of studies that developed or validated machine learning models to predict TB treatment failure. We searched PubMed/MEDLINE and Embase from January 2000 to October 2025. Studies were eligible if they developed, validated, or implemented an artificial intelligence or machine learning model for the prediction of TB treatment failure or a closely related poor outcome in patients receiving anti-TB treatment. Risk of bias was assessed using the Prediction model Risk Of Bias Assessment Tool. Random-effects meta-analysis was performed to pool area under the curve values, with subgroup analyses and meta-regression to explore heterogeneity. Results: Thirty-four studies were included in the systematic review, of which 19 reported area under the curve values suitable for meta-analysis (total participants, 100,790). Studies were published between 2014 and 2025, with 91% published from 2019 onward. Tree-based methods were the most common algorithm family (52.9%), and multimodal models integrating three or more data types were used in 41.2% of studies. The pooled area under the curve was 0.836 (95% confidence interval 0.799-0.868), with substantial heterogeneity (I² = 97.9%). In subgroup analyses, studies including HIV-positive participants showed lower discrimination (pooled area under the curve 0.748) compared to those excluding them (0.924). Only eight studies (23.5%) performed external validation, and only one study (2.9%) was rated as low risk of bias overall, primarily due to methodological concerns in the analysis domain. Egger's test suggested publication bias (p = 0.024). Major evidence gaps included underrepresentation of high-burden countries, HIV-affected populations, social determinants, pediatric TB, and extrapulmonary disease. Conclusions: Machine learning models for predicting TB treatment failure show promising discrimination but are not yet ready for routine clinical implementation. Performance varies substantially across populations and settings, and methodological limitations, including inadequate validation, poor calibration assessment, and high risk of bias, limit confidence in current estimates. Future research should prioritize rigorous external validation, calibration assessment, and development in underrepresented populations, particularly HIV-affected and high-burden settings. Author Summary: TB kills over a million people annually. While curable, treatment failure remains common and drives ongoing transmission and drug resistance. Researchers increasingly use artificial intelligence and machine learning to predict which patients will fail treatment, but it is unclear if these models are ready for clinical use. We reviewed 34 studies including nearly 1.1 million participants from 22 countries. On average, models correctly distinguished patients who would fail treatment from those who would not 84% of the time, a performance generally considered good. However, this average hid enormous variation. Models developed in populations including HIV-positive people performed substantially worse, suggesting prediction is harder with HIV co-infection. Worryingly, only one study used high-quality methods; 97% had serious flaws in handling missing data, checking calibration, or testing in new populations. Only eight studies validated their models in different settings.
To conclude, we found that machine learning is promising in predicting TB treatment failure, but it is not ready for clinical use. Researchers should prioritize validation in high-burden settings, include social determinants, and improve methodological rigor before these tools can help patients.
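The pooled AUC and I² figures above come from random-effects meta-analysis. Below is a generic DerSimonian-Laird sketch on made-up per-study AUCs and variances (not the paper's data); note that in practice analysts often pool a transformed scale such as logit-AUC rather than raw AUCs, which this sketch skips for brevity:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) with I^2."""
    e = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - fixed) ** 2)              # Cochran's Q
    df = len(e) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * e) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, se, i2

aucs = [0.70, 0.82, 0.88, 0.91, 0.78]             # hypothetical study AUCs
variances = [0.002, 0.001, 0.0015, 0.003, 0.002]  # hypothetical variances
pooled, se, i2 = dersimonian_laird(aucs, variances)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

The I² statistic reports the share of total variation attributable to between-study heterogeneity; a value near the paper's 97.9% means the pooled AUC is an average over genuinely different populations, which is why the subgroup estimates (0.748 vs 0.924) diverge so sharply.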
Enciso Durand, J. C.; Silva-Santisteban, A. A.; Reyes-Diaz, M.; Huicho, L.; Caceres, C. F.; LAMIS-2018
Objectives: In Latin America, up-to-date information to monitor UNAIDS 95-95-95 HIV targets in key populations, such as men who have sex with men (MSM), is limited. Elsewhere, structural homophobia restricts access to ART. Conceptual frameworks suggest that intersecting forms of violence and discrimination may negatively influence HIV care outcomes through psychosocial and structural pathways, although empirical evidence remains limited. The study aimed to assess whether sexual orientation outness and recent homophobic violence are associated with not being on ART among Latin American MSM living with HIV. Methods: This cross-sectional study is a secondary analysis of data from LAMIS-2018, including 7,609 MSM aged 18+ with an HIV diagnosis ≥1 year prior from 18 Latin American countries. Participants self-reported ART status, sociodemographic characteristics, homophobic violence, and sexual orientation outness. Bivariate and multivariate logistic regressions identified those factors associated with not being on ART. Results: Nine percent of MSM with HIV were not on ART, 18% reported low sexual orientation outness, and 27% experienced homophobic violence, especially in Andean and Central American countries. Not being on ART was associated with recent homophobic violence (aPR=1.25), low outness (aPR=1.22), unemployment (aPR=1.27), and residence in the Andean subregion (aPR=1.87), Mexico (aPR=1.28), or the Southern Cone (aPR=1.45) versus Brazil. Protective factors included being older (25-39: aPR=0.72; >39: aPR=0.49), living in large cities (aPR=0.72), having a stable partner (aPR=0.78), and university education (aPR=0.74). Conclusions: Recent homophobic violence and low sexual orientation outness were associated with not being on ART among MSM in Latin America. While access varies across countries, structural factors such as stigma and violence may limit engagement in care.
Addressing these barriers alongside strengthening health systems may be key to improving ART uptake and advancing progress toward the 95-95-95 targets.
Yi, B.; Kim, H. Y.; Sotka, W.; Estey, R.; Green, S. J.; Shiau, H.
Gingival inflammation is associated with dysbiotic oral biofilms characterized by reduced nitrate-reducing capacity and diminished nitric oxide (NO) bioavailability. While dietary nitrate has been shown to influence oral microbial activity, the effects of sustained, localized nitrate delivery on oral biofilm ecology and gingival inflammation remain incompletely defined. In this randomized, double-blind, placebo-controlled trial, 30 adults with gingival bleeding were assigned to receive localized prebiotic nitrate (~0.989 mmol per dose) or placebo for 21 days. The primary outcome was mean bleeding on probing (mBOP). Secondary outcomes included modified Gingival Index (mGI), Quigley-Hein plaque index (QHPI), salivary nitrite (as a proxy for NO bioavailability), oral pH, and microbiome composition assessed by 16S rRNA gene sequencing. The prebiotic nitrate formulation, delivered in a slow-release chewing gum, significantly reduced mBOP (from 25.7% to 15.3%; p = 0.0002) compared with placebo chewing gum. Salivary nitrite levels and oral pH increased, indicating enhanced nitrate metabolism. Microbiome analysis demonstrated enrichment of nitrate-reducing taxa, including Rothia mucilaginosa and Neisseria spp., and a relative reduction in inflammation-associated genera such as Prevotella and Porphyromonas. The localized prebiotic nitrate formulation delivered in a functional chewing gum was associated with reduced gingival inflammation and shifts in oral microbiome composition consistent with enhanced nitrate-reducing capacity critical in nitric oxide formation. These findings support a role for biofilm-directed nutritional modulation as a non-antimicrobial approach for managing gingival inflammation and improving nitric oxide bioavailability.
Malara, P.; Tosin, A. G.; Castellucci, A.; Martellucci, S.; Musumano, L. B.; Mandala, M.
An increasing number of studies highlight the role of saccadic remodulation in compensatory mechanisms following vestibular injury, and the reappearance of SHIMP saccades correlates with symptom improvement measured by the Dizziness Handicap Inventory (DHI). To investigate the influence of attentional processes and working memory on visuo-vestibular interaction, three independent but interrelated experiments were conducted. In the first two experiments, healthy subjects and patients with unilateral or bilateral vestibular deficits underwent vHIT in SHIMP mode and the Functional Head Impulse Test (fHIT), performed first separately and subsequently simultaneously. Mean latency and clustering of SHIMP saccades, together with Landolt C recognition rates, were analyzed. Differences between separate and combined protocols were assessed, and, in patients, correlated with symptom severity measured by the DHI, to determine whether the near-simultaneous execution of tasks mediated by shared parietal cortical substrates influenced performance. In the third experiment, vHIT in HIMP mode and fHIT were performed using separate and combined protocols to evaluate whether recognition-related cognitive load affected recovery saccade latency and clustering. Results suggest that visual recognition modulates visuo-vestibular interaction, supporting integrated dual-task protocols for ecological balance assessment and helping explain clinical discrepancies.
Wood Alexander, M.; Wood, B.; Oh, H. S.-H.; Bot, V. A.; Borger, J.; Galbiati, F.; Walker, K. A.; Resnick, S. M.; Ochs-Balcom, H. M.; Wyss-Coray, T.; Kooperberg, C.; Reiner, A. P.; Jacobs, E. G.; Rabin, J. S.; Casaletto, K. B.; Saloner, R.
Earlier menopause is a risk factor for several age-related diseases, including dementia. The biological pathways linking menopause timing to later-life brain aging remain poorly understood. Leveraging large-scale plasma proteomics in postmenopausal women from the UK Biobank (N=15,012), we found that earlier menopause was associated with upregulation of pro-inflammatory and extracellular matrix degradation pathways, as well as accelerated aging across proteomic clocks of organ and cellular aging, including brain and oligodendrocyte aging. Elevated GDF15, a canonical aging marker, was the top protein correlate of earlier menopause. We observed robust replication of menopause-timing proteomic shifts in the Women's Health Initiative Long Life Study (N=1,210). In UKB, proteins associated with earlier menopause, including GDF15, exhibited concordant associations with incident dementia risk, brain atrophy, cerebral small vessel disease burden, and white matter microstructural integrity. Collectively, our findings identify proteomic signatures linking ovarian aging to brain aging, providing a framework to inform interventions to reduce dementia risk.
Gunnarsson, C.; Ellegard, R.; Ahsberg, J.; Huda, S.; Andersson, J.; Dworeck, C. F.; Glaser, N.; Erlinge, D.; Loghman, H.; Johnston, N.; Mannila, M.; Pagonis, C.; Ravn-Fischer, A.; Rydberg, E.; Welen Schef, K.; Tornvall, P.; Sederholm Lawesson, S.; Swahn, E. E.
Background: Spontaneous coronary artery dissection (SCAD) is a well-recognised cause of acute coronary syndrome, particularly among women without conventional cardiovascular risk factors. Increasing evidence indicates a genetic contribution; however, the underlying genetic architecture of SCAD remains insufficiently understood. Objective: The aim of this study was to assess the prevalence of rare variants in previously reported SCAD-associated genes and to explore the potential presence of novel genetic alterations in well-characterised Swedish patients with SCAD. Methods: The study comprised 201 patients enrolled in SweSCAD, a national project examining the clinical characteristics, aetiology, and outcomes of SCAD. All individuals had a confirmed diagnosis based on invasive coronary angiography. Comprehensive exome sequencing was performed to identify rare variants contributing to disease susceptibility. Results: Genetic variants that have been associated with SCAD according to current clinical genetics practice for variant reporting were identified in approximately 4% of patients. In addition, rare potentially relevant variants were detected in almost 60% of patients in genes associated with vascular integrity and vascular remodelling. Conclusion: This study supports SCAD as a genetically complex arteriopathy, driven by rare high-impact variants together with broader polygenic susceptibility. Variants in collagen, vascular extracellular matrix, and oestrogen-responsive pathways provide biologically plausible links to female-predominant disease. Although the diagnostic yield of clearly actionable variants is modest, these findings support broader genomic evaluation beyond overt syndromic presentations and highlight the need for larger integrative genomic and functional studies to refine risk stratification and management.
Abal, A.; Apako, J.; Hurberd, Y.; Flipse, J.; Bastiaens, G.; Schaftenaar, E.
Objectives: To evaluate whether on-site molecular point-of-care testing (POCT) for Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) is associated with reduced antibiotic overtreatment for presumed sexually transmitted infections (STIs) among adults living with HIV in rural Uganda. Methods: We conducted a single-site quasi-experimental pre-post intervention study at Kumi Hospital, comparing syndromic management (April-August 2024) with CT/NG POCT-guided management (September 2024-January 2025). Adults living with HIV presenting with symptoms suggestive of an STI were included. Overtreatment in the pre-intervention phase was estimated by comparing antibiotic prescribing with the expected number of CT/NG infections based on positivity observed during the intervention phase. Results: A total of 404 participants were included (203 pre-intervention, 201 intervention). During the intervention phase, CT and/or NG were detected in 14 individuals (7.0%). Median test turnaround time was 95 minutes, enabling same-day treatment in 93% of positive cases. Antibiotic prescribing decreased from 99.0% to 11.4% following POCT implementation (P < 0.001), corresponding to an absolute reduction of 87.6 percentage points. Estimated overtreatment declined from 30.0% to 5.0% for NG and from 74.9% to 6.0% for CT (both P < 0.001). Conclusions: Implementation of CT/NG POCT in routine HIV care was associated with a marked reduction in antibiotic prescribing and estimated overtreatment for presumed STIs. These findings support the potential of POCT-guided, aetiology-based STI management to reduce unnecessary antimicrobial exposure in settings where syndromic management remains standard practice.